14 research outputs found

    A scaling-invariant algorithm for linear programming whose running time depends only on the constraint matrix

    Following the breakthrough work of Tardos (Oper. Res. '86) in the bit-complexity model, Vavasis and Ye (Math. Prog. '96) gave the first exact algorithm for linear programming in the real model of computation with running time depending only on the constraint matrix. For solving a linear program (LP) max cx, Ax = b, x ≥ 0, with A ∈ ℝ^{m×n}, Vavasis and Ye developed a primal-dual interior point method using a 'layered least squares' (LLS) step, and showed that O(n^{3.5} log(χ_A + n)) iterations suffice to solve (LP) exactly, where χ_A is a condition measure controlling the size of solutions to linear systems related to A. Monteiro and Tsuchiya (SIAM J. Optim. '03), noting that the central path is invariant under rescalings of the columns of A and c, asked whether there exists an LP algorithm depending instead on the measure χ*_A, defined as the minimum χ_{AD} value achievable by a column rescaling AD of A, and gave strong evidence that this should be the case. We resolve this open question affirmatively. Our first main contribution is an O(m^2 n^2 + n^3) time algorithm which works on the linear matroid of A to compute a nearly optimal diagonal rescaling D satisfying χ_{AD} ≤ n(χ*_A)^3. This algorithm also allows us to approximate the value of χ_A up to a factor n(χ*_A)^2. This result is in (surprising) contrast to that of Tunçel (Math. Prog. '99), who showed NP-hardness for approximating χ_A to within 2^{poly(rank(A))}. The key insight for our algorithm is to work with ratios g_i/g_j of circuits of A - i.e., minimal linear dependencies Ag = 0 - which allow us to approximate the value of χ*_A by a maximum geometric mean cycle computation in what we call the 'circuit ratio digraph' of A. While this resolves Monteiro and Tsuchiya's question by appropriate preprocessing, it falls short of providing either a truly scaling invariant algorithm or an improvement upon the base LLS analysis. In this vein, as our second main contribution we develop a scaling invariant LLS algorithm, which uses and dynamically maintains improving estimates of the circuit ratio digraph, together with a refined potential function based analysis for LLS algorithms in general. With this analysis, we derive an improved O(n^{2.5} log n log(χ*_A + n)) iteration bound for optimally solving (LP) using our algorithm. The same argument also yields a factor n/log n improvement on the iteration complexity bound of the original Vavasis-Ye algorithm.
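    The reduction described in this abstract, from estimating χ*_A to a maximum geometric mean cycle computation on the circuit ratio digraph, can be made concrete. The following is a minimal sketch, not the authors' implementation: with positive circuit ratios on the arcs, taking negative logarithms turns the maximum geometric-mean cycle into a minimum mean cycle, which Karp's algorithm computes in O(|V||E|) time. The arc-list representation and function names are illustrative assumptions.

```python
import math

def min_mean_cycle(n, arcs):
    """Karp's algorithm for the minimum mean weight of a directed cycle.

    n    -- number of vertices, labelled 0..n-1
    arcs -- list of (u, v, w) arcs with real weight w
    Returns None if the digraph is acyclic.
    """
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k arcs ending at v,
    # allowed to start at any vertex (d[0][v] = 0 plays the role of a
    # zero-cost super-source).
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in arcs:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] == INF:
            continue
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = min(best, worst)
    return None if best == INF else best

def max_geometric_mean_cycle(n, ratio_arcs):
    """Maximum geometric-mean cycle for arcs with positive weights r:
    maximize (product of r over the cycle)^(1/cycle length).
    Via logarithms this equals exp(-mu), where mu is the minimum mean
    cycle of the arc weights -log(r)."""
    mu = min_mean_cycle(n, [(u, v, -math.log(r)) for u, v, r in ratio_arcs])
    return None if mu is None else math.exp(-mu)
```

    Run on the circuit ratio digraph of A, the value returned by max_geometric_mean_cycle plays the role of the quantity the abstract uses to approximate χ*_A.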

    A Dual and Interior Point Approach to Solve Convex Min-Max Problems

    In this paper we propose an interior point method for solving the dual form of min-max type problems. The dual variables are updated by means of a scaling supergradient method. The boundary of the dual feasible region is avoided by the use of a logarithmic barrier function. A major difference with other interior point methods is the nonsmoothness of the objective function.
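    As a rough illustration of the ingredients named above (dual supergradient steps, a scaling of the update, and a logarithmic barrier keeping the iterates away from the boundary), here is a hedged sketch for the finite min-max problem min_x max_i f_i(x). The dual variable λ lives on the unit simplex and θ(λ) = min_x Σ_i λ_i f_i(x) is concave and nonsmooth; f_list, inner_min, and all step parameters below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def dual_barrier_supergradient(f_list, inner_min, lam0, mu=1e-2,
                               step=0.1, iters=200):
    """Sketch: maximize the nonsmooth concave dual
        theta(lam) = min_x  sum_i lam_i * f_i(x)
    over the unit simplex, for the min-max problem min_x max_i f_i(x).

    inner_min(lam) is assumed to return a minimizer x of
    sum_i lam_i * f_i(x); the vector (f_1(x), ..., f_m(x)) is then a
    supergradient of theta at lam.  A logarithmic barrier term
    mu * sum_i log(lam_i) keeps lam strictly inside the simplex, and
    the step is scaled multiplicatively by lam itself.
    """
    lam = np.asarray(lam0, dtype=float)
    for _ in range(iters):
        x = inner_min(lam)                        # inner minimization
        g = np.array([f(x) for f in f_list])      # supergradient of theta
        g = g + mu / lam                          # add barrier gradient
        lam = lam * np.exp(step * (g - lam @ g))  # scaled ascent step
        lam /= lam.sum()                          # renormalize onto the simplex
    return lam
```

    With f_list a list of convex functions and inner_min any solver for the weighted inner subproblem, the returned λ approximates a set of optimal dual weights; the multiplicative update is one way to realize a "scaling" of the supergradient step.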

    Identifying The Optimal Face Of A Network Linear Program With A Globally Convergent Interior Point Method

    Based on recent convergence results for the affine scaling algorithm for linear programming, we investigate strategies to identify the optimal face of a minimum cost network flow problem. In the computational experiments described, one of the proposed optimality indicators is used to implement an early stopping criterion in dlnet, an implementation of the dual affine scaling algorithm for solving minimum cost network flow problems. We conclude from the experiments that the new indicator is far more robust than the one used in earlier versions of dlnet.
    Key words: linear programming, minimum cost network flow, indicator, affine scaling algorithm, computer implementation. AMS(MOS) subject classifications: 65-05, 65F10, 65K05, 65Y05, 90C05, 90C06, 90C35.
    1. Introduction. The dual affine scaling (das) algorithm [3] has been shown to perform well in practice on linear programming problems [1, 2, 7, 8], large-scale network flow problems [13], and large-scale assignment problems [11, 12]. …
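    The kind of optimality indicator discussed above can be illustrated with a Tapia-type ratio test on successive iterates; this is a generic sketch under assumed inputs, not the specific indicator implemented in dlnet.

```python
import numpy as np

def tapia_indicator(x_prev, x_curr, threshold=0.5):
    """Tapia-type indicator: guess the optimal face from two successive
    interior-point (or affine scaling) iterates.

    The ratio x_j(k+1) / x_j(k) tends to 1 for variables that remain
    positive on the optimal face and to 0 for variables converging to
    zero, so thresholding the ratio yields a guess of the optimal face.
    Illustrative only; this is not the indicator implemented in dlnet.
    """
    x_prev = np.asarray(x_prev, dtype=float)
    x_curr = np.asarray(x_curr, dtype=float)
    ratios = x_curr / np.maximum(x_prev, np.finfo(float).tiny)
    return ratios > threshold  # True = variable guessed to lie on the optimal face
```

    An early stopping rule of the kind described in the abstract would then terminate the interior point iterations once the guessed face has stayed unchanged for a few consecutive iterations, after which the problem restricted to that face can be finished off exactly.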

    Interior Point Methods For Global Optimization

    Interior point methods, originally invented in the context of linear programming, have found a much broader range of applications, including global optimization problems that arise in engineering, computer science, operations research, and other disciplines. This chapter overviews the conceptual basis and applications of interior point methods for some classes of global optimization problems.

    A strategy of global convergence for the affine scaling algorithm for convex semidefinite programming

    The affine scaling algorithm is one of the earliest interior point methods developed for linear programming. This algorithm is simple and elegant in terms of its geometric interpretation, but it is notoriously difficult to prove its convergence. It often requires additional restrictive conditions such as nondegeneracy, specific initial solutions, and/or small step lengths to guarantee its global convergence. This situation is made worse when it comes to applying the affine scaling idea to the solution of semidefinite optimization problems or more general convex optimization problems. In (Math Program 83(1–3):393–406, 1998), Muramatsu presented an example of linear semidefinite programming for which the affine scaling algorithm with either short or long step converges to a non-optimal point. This paper aims at developing a strategy that guarantees global convergence of the affine scaling algorithm in the context of linearly constrained convex semidefinite optimization in a least restrictive manner. We propose a new step size rule, similar to the Armijo rule, and prove that the resulting affine scaling algorithm is globally convergent in the sense that each accumulation point of the sequence generated by the algorithm is an optimal solution, as long as the optimal solution set is nonempty and bounded. The algorithm is least restrictive in the sense that it allows the problem to be degenerate and it may start from any interior feasible point.
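    To give the flavor of such an Armijo-type safeguard, here is a minimal sketch for the simpler vector case min f(x) subject to Ax = b, x ≥ 0, starting from a strictly positive interior feasible point; the paper itself treats linearly constrained convex semidefinite programs, and its actual step-size rule differs. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def affine_scaling_armijo_step(x, f, grad_f, A, beta=0.5, sigma=1e-4):
    """One affine-scaling step with an Armijo-type backtracking rule for
        min f(x)  subject to  Ax = b,  x >= 0,
    starting from a strictly positive interior feasible x (Ax = b, x > 0).
    Illustrative vector-case sketch; not the paper's semidefinite rule.
    """
    g = grad_f(x)
    X2 = np.diag(x ** 2)                      # affine scaling metric
    # Projected direction d = -X^2 (g - A^T y) with A d = 0.
    y = np.linalg.solve(A @ X2 @ A.T, A @ (X2 @ g))
    d = -X2 @ (g - A.T @ y)
    # Largest step keeping x + alpha*d strictly positive (95% of the way
    # to the boundary), as in long-step affine scaling.
    neg = d < -1e-15
    alpha = 1.0 if not neg.any() else 0.95 * float(np.min(-x[neg] / d[neg]))
    # Armijo backtracking: shrink alpha until sufficient decrease holds.
    while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
        alpha *= beta
    return x + alpha * d
```

    The sufficient-decrease test here stands in for the Armijo-like rule described in the abstract; the convergence guarantee quoted above is proved in the paper for the semidefinite setting, not for this simplified sketch.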